
    Consumer finance: challenges for operational research

    No full text
    Consumer finance has become one of the most important areas of banking, because of the amount of money being lent, the impact of such credit on the global economy, and the realisation that the credit crunch of 2008 was partly due to incorrect modelling of the risks in such lending. This paper reviews the development of credit scoring—the way of assessing risk in consumer finance—and what is meant by a credit score. It then outlines 10 challenges for Operational Research to support modelling in consumer finance. Some of these involve developing more robust risk assessment systems, whereas others involve expanding the use of such modelling to deal with the current objectives of lenders and the new decisions they have to make in consumer finance.

    The relationship between default and economic cycles for retail portfolios across countries

    No full text
    In this paper, we collect consumer delinquency data from several economic shocks in order to study the creation of stress-testing models. We leverage the dual-time dynamics modeling technique to better isolate macroeconomic impacts whenever vintage-level performance data are available. The stress-testing models follow a framework, described here, of focusing on consumer-centric macroeconomic variables so that the models are as robust as possible when predicting the impacts of future shocks.
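
    As a rough illustration of the vintage-level idea (not the paper's dual-time dynamics implementation itself), the sketch below decomposes a synthetic panel of vintage default rates into a months-on-book (maturation) component and a calendar-time component using dummy-variable regression; the data, column names and functional forms are all assumptions.

# Sketch: additive decomposition of vintage default rates into a
# months-on-book effect and a calendar-time effect (a simplified
# stand-in for dual-time dynamics). 'panel' is a hypothetical frame
# with columns: vintage, calendar_month, months_on_book, default_rate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for vintage in range(24):                      # 24 monthly vintages
    for mob in range(1, 25):                   # 24 months on book
        cal = vintage + mob                    # calendar index
        maturation = 0.02 * np.exp(-((mob - 10) ** 2) / 50)  # hump shape
        macro = 0.005 * np.sin(cal / 6.0)      # cyclical macro effect
        rate = max(maturation + macro + rng.normal(0, 0.001), 0.0)
        rows.append((vintage, cal, mob, rate))
panel = pd.DataFrame(rows, columns=["vintage", "calendar_month",
                                    "months_on_book", "default_rate"])

# Dummy-variable regression: log-odds of the default rate explained by
# months-on-book and calendar-time factors.
p = panel["default_rate"].clip(lower=1e-4)
panel["logit_rate"] = np.log(p / (1 - p))
model = smf.ols("logit_rate ~ C(months_on_book) + C(calendar_month)",
                data=panel).fit()
# The calendar-time coefficients are the component one would then link
# to macroeconomic variables for stress testing.
print(model.params.filter(like="calendar_month").head())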

    When to rebuild or when to adjust scorecards

    No full text
    Data-based scorecards, such as those used in credit scoring, age with time and need to be rebuilt or readjusted. Unlike the huge literature on modelling the replacement and maintenance of equipment, there have been hardly any models that deal with this problem for scorecards. This paper identifies an effective way of describing the predictive ability of a scorecard and, from this, describes a simple model of how its predictive ability will develop. Using a dynamic programming approach, one can then find when it is optimal to rebuild and when to readjust a scorecard. Failing to readjust or rebuild scorecards as they aged was one of the defects in credit scoring identified in the investigations into the sub-prime mortgage crisis.
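
    The paper's calibrated model is not reproduced here, but the following sketch shows the shape of such a dynamic program: the state is a discretised predictive-ability level (e.g. a Gini coefficient) that decays each period, the actions are keep, readjust or rebuild, and backward induction gives the optimal action per state. The decay rule, costs and profit scaling are illustrative assumptions.

# Sketch: finite-horizon dynamic program for the keep / readjust / rebuild
# decision on an ageing scorecard. All numbers are illustrative assumptions.
import numpy as np

levels = np.linspace(0.40, 0.80, 21)       # possible Gini levels
horizon = 36                               # months to plan over
decay = 0.01                               # Gini lost per month if kept
adjust_gain, rebuild_level = 0.05, 0.80    # effect of each action
cost = {"keep": 0.0, "adjust": 1.0, "rebuild": 5.0}
profit_per_gini = 40.0                     # monthly profit scaling with Gini

def nearest(g):
    return int(np.abs(levels - g).argmin())

V = np.zeros((horizon + 1, len(levels)))   # value-to-go
policy = np.empty((horizon, len(levels)), dtype=object)

for t in range(horizon - 1, -1, -1):
    for i, g in enumerate(levels):
        candidates = {
            "keep":    profit_per_gini * g - cost["keep"]
                       + V[t + 1, nearest(g - decay)],
            "adjust":  profit_per_gini * g - cost["adjust"]
                       + V[t + 1, nearest(min(g + adjust_gain, 0.80) - decay)],
            "rebuild": profit_per_gini * g - cost["rebuild"]
                       + V[t + 1, nearest(rebuild_level - decay)],
        }
        policy[t, i] = max(candidates, key=candidates.get)
        V[t, i] = candidates[policy[t, i]]

# Optimal first-period action for each current Gini level:
print(dict(zip(np.round(levels, 2), policy[0])))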

    Improving credit scoring by differentiating default behaviour

    No full text
    We present a methodology for improving credit scoring models by distinguishing two forms of rational behaviour among loan defaulters. It is common knowledge among practitioners that there are two types of defaulters: those who do not pay because of cash flow problems (‘Can’t Pay’), and those who do not pay because of a lack of willingness to pay (‘Won’t Pay’). This work proposes to differentiate them using a game theory model that describes their behaviour. This separation of behaviours is represented by a set of constraints that form part of a semi-supervised constrained clustering algorithm, constructing a new target variable summarizing relevant future information. Within this approach the results of several supervised models are benchmarked, where each model delivers the probability of belonging to one of three new classes (good payers, ‘Can’t Pays’, and ‘Won’t Pays’). The process improves classification accuracy significantly and delivers strong insights regarding the behaviour of defaulters.
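
    The game-theoretic constraints and the constrained clustering step are not reproduced here; the sketch below assumes the three-class target (good payer / Can't Pay / Won't Pay) has already been constructed and only illustrates the benchmarking step with a multinomial logistic regression on synthetic data. Class labels and feature counts are assumptions.

# Sketch: benchmarking a supervised model on the three-class target
# (good payer / Can't Pay / Won't Pay). The target construction via
# game-theoretic constraints and constrained clustering is assumed to
# have been done already; data here are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=5000, n_features=12, n_informative=6,
                           n_classes=3, weights=[0.8, 0.1, 0.1],
                           random_state=1)
class_names = ["good payer", "Can't Pay", "Won't Pay"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=1)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_tr, y_tr)

# Class membership probabilities for each applicant, as delivered by the
# benchmarked models.
probs = clf.predict_proba(X_te)
print(classification_report(y_te, clf.predict(X_te), target_names=class_names))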

    Stress testing credit card portfolios: an application in South Africa

    No full text
    Motivated by a real problem, this study aims to develop models to conduct stress testing of credit card portfolios. Two modelling approaches were extended to include the impact of lenders’ actions within the model. The first approach was a regression model of aggregate losses based on economic variables, with autocorrelation of the errors. The second approach was a set of vintage-level models that highlighted the months-on-book effect on credit losses. A case study applying the models to South African credit card data is described. In this case, the models were used to stress test the credit card portfolio under several economic scenarios.
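
    One common way to fit a loss regression with autocorrelated errors is an AR(1) error specification, sketched below with statsmodels' GLSAR; this is an illustration of the idea, not necessarily the paper's exact specification, and the macroeconomic series and coefficients are synthetic placeholders.

# Sketch: regressing aggregate credit losses on macroeconomic variables
# with an AR(1) error structure. Data are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 120                                        # 10 years of monthly data
unemployment = 5 + np.cumsum(rng.normal(0, 0.1, n))
interest_rate = 8 + np.cumsum(rng.normal(0, 0.05, n))

# AR(1) errors plus a linear dependence on the macro variables.
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal(0, 0.2)
loss_rate = 1.0 + 0.3 * unemployment + 0.2 * interest_rate + e

X = sm.add_constant(np.column_stack([unemployment, interest_rate]))
model = sm.GLSAR(loss_rate, X, rho=1)          # AR(1) error model
results = model.iterative_fit(maxiter=10)      # Cochrane-Orcutt style fit
print(results.params, model.rho)

# Stress test: predicted loss rate under a hypothetical unemployment shock.
stressed = sm.add_constant(np.column_stack([unemployment + 3.0, interest_rate]))
print(results.predict(stressed)[-12:])         # last year under stress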

    Dynamic affordability assessment: predicting an applicant’s ability to repay over the life of the loan

    No full text
    In the credit decision-making process, both an applicant's creditworthiness and their affordability should be assessed. While credit scoring focuses on creditworthiness, affordability is often checked on the basis of current income, estimated current consumption, and existing debts stated in a credit report. In contrast to that static approach, a theoretical framework for dynamic affordability assessment is proposed in this paper. In this approach, both income and consumption are allowed to vary over time and their changes are described with random effects models for panel data. The models are derived from the economic literature, including the Euler equation of consumption. A simulation based on these models is run and predicted time series are generated for a given applicant. For each pair of predicted income and consumption time series, the applicant's ability to repay is checked over the life of the loan for all possible instalment amounts. As a result, a probability of default is assigned to each amount, which can help find the maximum affordable instalment. This is illustrated with an example based on artificial data. Assessing affordability over the loan repayment period, as well as taking into account the variability of income and expenditure over time, is in line with recommendations of the UK Office of Fair Trading and the Financial Services Authority. In practice, the suggested approach could contribute to responsible lending.
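
    The simulation idea can be sketched as a Monte Carlo affordability check. In the sketch below, simple AR(1) dynamics stand in for the paper's random-effects panel models, and the probability of default for each candidate instalment is the share of simulated paths in which the income-minus-consumption surplus cannot cover the instalment in some month. All starting values, drifts and volatilities are assumptions.

# Sketch: Monte Carlo affordability check over the life of the loan.
# AR(1) income and consumption dynamics stand in for the paper's
# random-effects panel models; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_sims, n_months = 2000, 36                 # loan term: 36 months
income0, consumption0 = 3000.0, 2200.0      # current levels (assumed)

def simulate_paths(start, drift, sigma):
    paths = np.empty((n_sims, n_months))
    paths[:, 0] = start
    for t in range(1, n_months):
        shock = rng.normal(0, sigma, n_sims)
        paths[:, t] = drift + 0.95 * paths[:, t - 1] + shock
    return paths

income = simulate_paths(income0, drift=160.0, sigma=120.0)
consumption = simulate_paths(consumption0, drift=120.0, sigma=80.0)
surplus = income - consumption              # money left for the instalment

for instalment in (200, 400, 600, 800):
    # Default if the surplus falls below the instalment in any month.
    defaulted = (surplus < instalment).any(axis=1)
    print(f"instalment {instalment}: PD = {defaulted.mean():.3f}")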

    Mining whole sample mass spectrometry proteomics data for biomarkers: an overview

    No full text
    In this paper we aim to provide a concise overview of designing and conducting an MS proteomics experiment in such a way as to allow statistical analysis that may lead to the discovery of novel biomarkers. We provide a summary of the various stages that make up such an experiment, highlighting the need for experimental goals to be decided upon in advance. We discuss issues in experimental design at the sample collection stage, and good practice for standardising protocols within the proteomics laboratory. We then describe approaches to the data mining stage of the experiment, including the processing steps that transform a raw mass spectrum into a usable form. We propose a permutation-based procedure for determining the significance of reported error rates. Finally, because of its general advantages in speed and cost, we suggest that MS proteomics may be a good candidate for an early primary screening approach to disease diagnosis, identifying areas of risk and making referrals for more specific tests without necessarily making a diagnosis in its own right. Our discussion is illustrated with examples drawn from experiments on bovine blood serum conducted in the Centre for Proteomic Research (CPR) at Southampton University.
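
    The general permutation idea (shuffle the class labels, re-estimate the cross-validated error rate many times, and compare) can be illustrated with scikit-learn's generic permutation_test_score; this is not the authors' own procedure, and the synthetic features below merely stand in for processed spectrum intensities.

# Sketch: permutation-based significance of a cross-validated error rate.
# Uses scikit-learn's generic permutation_test_score on synthetic data
# (few samples, many "peaks"), not the authors' exact procedure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, permutation_test_score

X, y = make_classification(n_samples=120, n_features=300, n_informative=10,
                           random_state=4)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=4)
score, perm_scores, p_value = permutation_test_score(
    LinearSVC(max_iter=5000), X, y, cv=cv, n_permutations=200,
    scoring="accuracy", random_state=4)

print(f"observed accuracy      : {score:.3f}")
print(f"mean permuted accuracy : {perm_scores.mean():.3f}")
print(f"permutation p-value    : {p_value:.4f}")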

    The duration derby : a comparison of duration based strategies in asset liability management

    Get PDF
    The Macaulay duration-matched strategy is a key tool in bond portfolio immunization. It is well known that if term structures are not flat or changes are not parallel, then a Macaulay duration-matched portfolio cannot guarantee adequate immunization. In this paper the approximate duration is proposed to measure bond price sensitivity to changes of interest rates under non-flat term structures. Its performance in immunization is compared with those of the Macaulay, partial and key rate durations using US Treasury STRIPS and bond data. Approximate duration turns out to be a possible contender in asset liability management: it does not assume any particular structure or pattern of changes of interest rates, it does not need short selling of bonds, and it is easy to set up and rebalance the optimal portfolio with linear programming.
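
    As a sketch of the linear programming setup (a generic duration-matching immunization LP, not the paper's approximate-duration formulation), the example below maximises portfolio yield subject to the weights summing to one, the portfolio duration matching a liability target, and no short selling. The bond yields, durations and target are illustrative assumptions.

# Sketch: duration-matched, long-only bond portfolio via linear programming.
# Bond yields, durations and the liability duration are assumed values.
import numpy as np
from scipy.optimize import linprog

yields = np.array([0.030, 0.035, 0.042, 0.048])   # candidate bond yields
durations = np.array([2.0, 4.5, 7.0, 10.0])       # their durations (years)
target_duration = 6.0                             # duration of the liabilities

# Maximise portfolio yield (linprog minimises, so negate) subject to:
#   weights sum to 1, portfolio duration equals the target, no short selling.
c = -yields
A_eq = np.vstack([np.ones_like(yields), durations])
b_eq = np.array([1.0, target_duration])
bounds = [(0.0, 1.0)] * len(yields)               # long-only positions

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("portfolio weights:", np.round(res.x, 3))
print("portfolio yield  :", float(-res.fun))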

    Some statistical models for durations and their applications in finance

    Get PDF
    We first consider a new class of time series models (introduced by Engle and Russell (1998)) used in statistical applications in finance. These models treat the time between events (durations) as a stochastic process, and the corresponding durations are modelled using a theory similar to that of autoregressive processes. This new class of time series models is called Autoregressive Conditional Duration (ACD) models. Various extensions and the statistical properties of this class of ACD models are given. We also suggest some alternative models for durations arising from the market microstructure literature. An estimation procedure is discussed. The theory is illustrated using a potential application in finance.
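
    In the basic ACD(1,1) model the conditional expected duration follows psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1}, with observed durations x_i = psi_i * eps_i for i.i.d. positive, unit-mean innovations eps_i. The sketch below simulates such a process with exponential innovations; the parameter values are illustrative, not estimates from any dataset.

# Sketch: simulating a basic ACD(1,1) process,
#   psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},  x_i = psi_i * eps_i,
# with unit-mean exponential innovations. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(5)
omega, alpha, beta = 0.1, 0.2, 0.7          # alpha + beta < 1 for stationarity
n = 10_000

psi = np.empty(n)                            # conditional expected durations
x = np.empty(n)                              # observed durations
psi[0] = omega / (1 - alpha - beta)          # start at the unconditional mean
x[0] = psi[0] * rng.exponential(1.0)

for i in range(1, n):
    psi[i] = omega + alpha * x[i - 1] + beta * psi[i - 1]
    x[i] = psi[i] * rng.exponential(1.0)

print("sample mean duration :", x.mean())
print("theoretical mean     :", omega / (1 - alpha - beta))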

    Does segmentation always improve model performance in credit scoring?

    No full text
    Credit scoring allows for the credit risk assessment of bank customers. A single scoring model (scorecard) can be developed for the entire customer population, e.g. using logistic regression. However, it is often expected that segmentation, i.e. dividing the population into several groups and building separate scorecards for them, will improve model performance. The most common statistical methods for segmentation are two-step approaches, where logistic regression follows Classification and Regression Trees (CART) or Chi-squared Automatic Interaction Detection (CHAID) trees, etc. In this research, the two-step approaches are applied, as well as a new, simultaneous method in which both the segmentation and the scorecards are optimised at the same time: Logistic Trees with Unbiased Selection (LOTUS). For reference purposes, a single-scorecard model is used. The above-mentioned methods are applied to data provided by two of the major UK banks and one of the European credit bureaus. The model performance measures are then compared to examine whether there is an improvement due to the segmentation methods used. It is found that segmentation does not always improve model performance in credit scoring: for none of the analysed real-world datasets do the multi-scorecard models perform considerably better than the single-scorecard ones. Moreover, in this application, there is no difference in performance between the two-step and simultaneous approaches.
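
    A minimal version of the comparison can be sketched as follows: a single logistic-regression scorecard versus a two-step model in which a shallow CART forms the segments and a separate logistic regression is fitted per segment, both evaluated by AUC. The data are synthetic (not the banks' or credit bureau's datasets), and LOTUS is not implemented here.

# Sketch: single scorecard vs. a two-step segmented model (shallow CART
# for segments, then one logistic regression per segment). Synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=20000, n_features=10, n_informative=6,
                           weights=[0.9, 0.1], random_state=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=6)

# Single scorecard for the whole population.
single = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc_single = roc_auc_score(y_te, single.predict_proba(X_te)[:, 1])

# Two-step model: segment with a shallow tree, then one scorecard per leaf.
tree = DecisionTreeClassifier(max_depth=2, random_state=6).fit(X_tr, y_tr)
seg_tr, seg_te = tree.apply(X_tr), tree.apply(X_te)
scores = np.zeros(len(y_te))
for seg in np.unique(seg_tr):
    tr_mask, te_mask = seg_tr == seg, seg_te == seg
    if np.unique(y_tr[tr_mask]).size < 2:        # degenerate (one-class) leaf:
        scores[te_mask] = single.predict_proba(X_te[te_mask])[:, 1]
        continue
    lr = LogisticRegression(max_iter=1000).fit(X_tr[tr_mask], y_tr[tr_mask])
    scores[te_mask] = lr.predict_proba(X_te[te_mask])[:, 1]
auc_segmented = roc_auc_score(y_te, scores)

print(f"single scorecard AUC : {auc_single:.4f}")
print(f"segmented model AUC  : {auc_segmented:.4f}")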